Review on interpretability of deep learning
Xia LEI, Xionglin LUO
Journal of Computer Applications    2022, 42 (11): 3588-3602.   DOI: 10.11772/j.issn.1001-9081.2021122118

With the widespread application of deep learning, humans increasingly rely on complex systems built on deep learning techniques. However, the black-box nature of deep learning models poses challenges to their use in mission-critical applications and raises ethical and legal concerns. Making deep learning models interpretable is therefore the first problem to be solved in making them trustworthy, and research in the field of interpretable artificial intelligence has emerged in response. This research mainly focuses on explicitly explaining model decisions or behaviors to human observers. A review of the interpretability of deep learning was performed to lay a foundation for further in-depth research and for the construction of more efficient and interpretable deep learning models. Firstly, the interpretability of deep learning was outlined, and the requirements and definitions of interpretability research were clarified. Then, several typical models and algorithms of interpretability research were introduced from three aspects: explaining the logic rules, the decision attribution, and the internal structure representation of deep learning models. In addition, three common methods for constructing intrinsically interpretable models were pointed out. Finally, four evaluation indicators (fidelity, accuracy, robustness, and comprehensibility) were briefly introduced, and possible future directions of deep learning interpretability research were discussed.
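The decision-attribution family of methods mentioned in the abstract can be illustrated with a minimal gradient-times-input saliency sketch. This toy example is an assumption for illustration, not a method from the article: the "model" is a small linear classifier, so the gradient of a class score with respect to the input is simply the corresponding weight row.

```python
import numpy as np

# Minimal sketch of gradient-based decision attribution (saliency),
# one family of attribution approaches surveyed in such reviews.
# The model and all names here are illustrative, not from the article.

def model_score(x, W, b, cls):
    """Score of class `cls` for input x under a toy linear model."""
    return (W @ x + b)[cls]

def gradient_x_input(x, W, b, cls):
    """Gradient-times-input attribution.
    For a linear model, d(score)/dx is just the weight row W[cls]."""
    grad = W[cls]
    return grad * x

x = np.array([1.0, 0.0, 2.0])
W = np.array([[0.5, -1.0, 0.25],
              [0.1,  0.3, 0.90]])
b = np.array([0.0, 0.0])

attr = gradient_x_input(x, W, b, cls=0)
# For a linear model, attributions sum to the bias-free class score
# (the "completeness" sanity check used for attribution methods):
assert np.isclose(attr.sum(), model_score(x, W, b, 0) - b[0])
```

For deep nonlinear models the gradient is obtained by backpropagation instead of read off a weight matrix, but the per-feature ranking idea is the same.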

Design and implementation of multi-policy security model for Nutos operating system
XIA Lei, HUANG Hao, Shuying YU, Zhiqiang WANG
Journal of Computer Applications   
The increasing diversity and complexity of computing environments result in a variety of security requirements. The MLS security policy aims only at confidentiality assurance, gives little consideration to integrity assurance, and is weak in channel control; moreover, its trusted subjects have many security shortcomings. To address these problems, a Multi-Policy Views Security Model (MPVSM) was presented. Based on the MLS model, MPVSM incorporates domain and type attributes in order to enforce channel control policies, make permission management more fine-grained, and enhance the ability to confine the permissions of trusted subjects. MPVSM is also able to enforce multiple policy views in an operating system in a flexible way. The implementation of the MPVSM model in the prototype trusted operating system Nutos was also introduced.
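The idea of combining MLS level checks with domain/type attributes can be sketched as a policy conjunction: an access is granted only if it passes both the confidentiality rules and a domain/type permission table. All names, levels, and table entries below are hypothetical illustrations, not the paper's exact model.

```python
# Hypothetical sketch of an MPVSM-style access check: both the MLS
# (Bell-LaPadula style) rules and a domain/type enforcement table
# must grant the access. All identifiers here are illustrative.

MLS_LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

# domain/type enforcement: (subject_domain, object_type) -> permitted ops
DTE_TABLE = {
    ("user_d", "public_t"): {"read", "write"},
    ("user_d", "secret_t"): {"read"},
}

def mls_allows(subj_level, obj_level, op):
    # "no read up, no write down" (simple-security and *-property)
    if op == "read":
        return MLS_LEVELS[subj_level] >= MLS_LEVELS[obj_level]
    if op == "write":
        return MLS_LEVELS[subj_level] <= MLS_LEVELS[obj_level]
    return False

def mpvsm_allows(subj_level, subj_domain, obj_level, obj_type, op):
    # Policy conjunction: every enforced policy must grant the access.
    return (mls_allows(subj_level, obj_level, op)
            and op in DTE_TABLE.get((subj_domain, obj_type), set()))

# A secret-level subject in user_d may read a secret object, but the
# DTE table withholds write permission even though MLS would allow it:
print(mpvsm_allows("secret", "user_d", "secret", "secret_t", "read"))   # True
print(mpvsm_allows("secret", "user_d", "secret", "secret_t", "write"))  # False
```

The design point this illustrates is that the domain/type layer can confine a trusted subject's permissions beyond what the MLS lattice alone would permit.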